# Multilingual transfer learning
## Wav2vec2 Large Xls R 300m Ru

A Russian automatic speech recognition (ASR) model fine-tuned from facebook/wav2vec2-xls-r-300m on the common_voice_17_0 dataset, achieving a word error rate (WER) of 0.195.

License: Apache-2.0 · Tags: Speech Recognition, Transformers · Author: NLPVladimir · Downloads: 56 · Likes: 1
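The wav2vec2 fine-tunes in this list all share the same inference interface. A minimal sketch using the transformers ASR pipeline, with the compact public facebook/wav2vec2-base-960h checkpoint standing in for illustration (substitute any of the fine-tuned repo ids above once confirmed on the Hub):

```python
# Sketch: speech-to-text with a wav2vec2 checkpoint via the transformers
# pipeline. facebook/wav2vec2-base-960h is a small public English model used
# here only as a stand-in; a Russian/Urdu/... fine-tune loads the same way.
import numpy as np
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h")

# wav2vec2 models expect 16 kHz mono float32 audio; a 1-second silent
# buffer stands in for a real recording here.
audio = np.zeros(16000, dtype=np.float32)
result = asr(audio)
print(result["text"])
```

The same call works on a path to a WAV file instead of a raw array.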
## Llama 3.1 SauerkrautLM 70b Instruct

An efficient multilingual large language model built with spectrum fine-tuning, supporting multiple languages including German and English.

Tags: Large Language Model, Transformers, Multilingual · Author: VAGOsolutions · Downloads: 1,568 · Likes: 23
## Mgpt 1.3B Ukrainian

A 1.3-billion-parameter language model optimized for Ukrainian, fine-tuned from mGPT-XL (1.3B).

License: MIT · Tags: Large Language Model, Transformers, Multilingual · Author: ai-forever · Downloads: 149 · Likes: 4
## Indosbert Large

indoSBERT-large is an Indonesian sentence embedding model based on sentence-transformers; it maps sentences and paragraphs into a 256-dimensional vector space, suitable for tasks such as clustering and semantic search.

Tags: Text Embedding, Other · Author: denaya · Downloads: 510 · Likes: 13
## FYP ARABIZI

A speech recognition model fine-tuned from facebook/wav2vec2-large-xlsr-53 on an unspecified dataset, supporting recognition of Arabic dialects (Arabizi).

License: Apache-2.0 · Tags: Speech Recognition, Transformers · Author: ali-issa · Downloads: 33 · Likes: 1
## English Filipino Wav2vec2 L Xls R Test 09

An English-Filipino speech recognition model fine-tuned from jonatasgrosman/wav2vec2-large-xlsr-53-english, achieving a WER of 0.5750 on the evaluation set.

License: Apache-2.0 · Tags: Speech Recognition, Transformers · Author: Khalsuu · Downloads: 29.03k · Likes: 1
## English Filipino Wav2vec2 L Xls R Test 06

A fine-tuned version of jonatasgrosman/wav2vec2-large-xlsr-53-english trained on the filipino_voice dataset, designed for English and Filipino speech recognition.

License: Apache-2.0 · Tags: Speech Recognition, Transformers · Author: Khalsuu · Downloads: 24 · Likes: 0
## English Filipino Wav2vec2 L Xls R Test 04

A fine-tuned version of jonatasgrosman/wav2vec2-large-xlsr-53-english trained on the filipino_voice dataset, designed for English-Filipino speech recognition.

License: Apache-2.0 · Tags: Speech Recognition, Transformers · Author: Khalsuu · Downloads: 21 · Likes: 0
## English Filipino Wav2vec2 L Xls R Test

An English-Filipino speech recognition model fine-tuned from jonatasgrosman/wav2vec2-large-xlsr-53-english.

License: Apache-2.0 · Tags: Speech Recognition, Transformers · Author: Khalsuu · Downloads: 18 · Likes: 0
## Wav2vec2 Large Xlsr 53 Toy Train Data Masked Audio 10ms

A speech recognition model fine-tuned from facebook/wav2vec2-large-xlsr-53 on toy training data with 10 ms audio masking.

License: Apache-2.0 · Tags: Speech Recognition, Transformers · Author: scasutt · Downloads: 22 · Likes: 0
## Wav2vec2 Xls R 300m Urdu

Facebook's 300M-parameter speech recognition model fine-tuned for Urdu on the Common Voice 8.0 Urdu dataset.

Tags: Speech Recognition, Transformers · Author: aasem · Downloads: 16 · Likes: 1
## Wav2vec2 Large Xlsr Nahuatl

A Nahuatl (ncj dialect) speech recognition model fine-tuned from facebook/wav2vec2-large-xlsr-53.

License: Apache-2.0 · Tags: Speech Recognition, Transformers · Author: tyoc213 · Downloads: 18 · Likes: 1
## Xlm Roberta Base Ft Udpos28 Is

An Icelandic part-of-speech (POS) tagging model based on the XLM-RoBERTa architecture, fine-tuned on Universal Dependencies v2.8.

License: Apache-2.0 · Tags: Sequence Labeling, Transformers, Other · Author: wietsedv · Downloads: 15 · Likes: 0
## Xlm Roberta Base Finetuned Ner Wolof

A token classification model for Wolof named entity recognition (NER), fine-tuned from xlm-roberta-base on the Wolof portion of the MasakhaNER dataset.

Tags: Sequence Labeling, Transformers, Other · Author: mbeukman · Downloads: 49 · Likes: 0
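The MasakhaNER fine-tunes in this list are token-classification models, and inference goes through the same pipeline regardless of language. A sketch with the public dslim/bert-base-NER English checkpoint as a lightweight stand-in; the Wolof and Swahili models' exact repo ids should be confirmed on the Hub:

```python
# Sketch: named-entity recognition with a token-classification pipeline.
# dslim/bert-base-NER is a public English NER model used for illustration;
# the Wolof/Swahili fine-tunes listed here expose the same interface.
from transformers import pipeline

ner = pipeline("token-classification",
               model="dslim/bert-base-NER",
               aggregation_strategy="simple")  # merge word pieces into entity spans

entities = ner("Angela Merkel met Emmanuel Macron in Paris.")
for ent in entities:
    print(ent["entity_group"], ent["word"], round(float(ent["score"]), 2))
```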
## Xls R 300m It Cv8

A speech recognition model fine-tuned from facebook/wav2vec2-xls-r-300m on the Common Voice Swedish dataset, achieving a word error rate (WER) of 1.0286 on the evaluation set.

Tags: Speech Recognition, Transformers · Author: masapasa · Downloads: 19 · Likes: 1
## Wav2vec2 Large Xlsr 53 Polish

A Polish automatic speech recognition model developed by Facebook, based on the Wav2Vec2 architecture and the multilingual XLSR-53 pretrained model.

License: Apache-2.0 · Tags: Speech Recognition, Other · Author: facebook · Downloads: 174 · Likes: 3
## Gujarati XLM R Base

A model based on the base variant of XLM-RoBERTa, fine-tuned on Gujarati monolingual data from the OSCAR corpus, suitable for Gujarati natural language processing tasks.

Tags: Large Language Model, Transformers, Other · Author: ashwani-tanwar · Downloads: 22 · Likes: 0
## Xlm Roberta Base Finetuned Swahili Finetuned Ner Swahili

A model fine-tuned on the Swahili portion of the MasakhaNER dataset for named entity recognition in Swahili text.

Tags: Sequence Labeling, Transformers, Other · Author: mbeukman · Downloads: 14 · Likes: 1
## Gpt2 Medium Italian Embeddings

A model based on OpenAI's medium-size GPT-2, retrained with an Italian vocabulary, suitable for Italian text generation tasks.

Tags: Large Language Model, Other · Author: GroNLP · Downloads: 139 · Likes: 3
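Text generation with a GPT-2 checkpoint like the one above goes through the standard transformers pipeline. A sketch using the stock English gpt2 model to keep the download small; the Italian model's repo id under the GroNLP namespace should be verified on the Hub before substituting it:

```python
# Sketch: text generation with a GPT-2 checkpoint via the transformers
# pipeline. The stock English "gpt2" model is a stand-in; an Italian
# retrain would be loaded the same way by its repo id.
from transformers import pipeline, set_seed

set_seed(0)  # make any sampling reproducible
gen = pipeline("text-generation", model="gpt2")
out = gen("Transfer learning lets a multilingual model", max_new_tokens=20)
print(out[0]["generated_text"])  # prompt followed by the model's continuation
```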